
    Beyond Nyquist: Efficient Sampling of Sparse Bandlimited Signals

    Wideband analog signals push contemporary analog-to-digital conversion systems to their performance limits. In many applications, however, sampling at the Nyquist rate is inefficient because the signals of interest contain only a small number of significant frequencies relative to the bandlimit, although the locations of the frequencies may not be known a priori. For this type of sparse signal, other sampling strategies are possible. This paper describes a new type of data acquisition system, called a random demodulator, that is constructed from robust, readily available components. Let K denote the total number of frequencies in the signal, and let W denote its bandlimit in Hz. Simulations suggest that the random demodulator requires just O(K log(W/K)) samples per second to stably reconstruct the signal. This sampling rate is exponentially lower than the Nyquist rate of W Hz. In contrast with Nyquist sampling, one must use nonlinear methods, such as convex programming, to recover the signal from the samples taken by the random demodulator. This paper provides a detailed theoretical analysis of the system's performance that supports the empirical observations.
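
    The acquisition chain is simple enough to prototype in a few lines. The Python sketch below is a minimal illustration, not the authors' code: it uses a DCT sparsity basis in place of the paper's Fourier tones (to keep everything real-valued), picks arbitrary sizes n = 256, m = 64, K = 5, and solves basis pursuit as a linear program with SciPy's HiGHS backend.

```python
import numpy as np
from scipy.fft import dct
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, K = 256, 64, 5                        # Nyquist length, measurements, sparsity

# K-sparse coefficient vector s and its time-domain signal x = Psi^T s
Psi = dct(np.eye(n), axis=0, norm='ortho')  # orthonormal DCT matrix
s = np.zeros(n)
s[rng.choice(n, size=K, replace=False)] = rng.standard_normal(K)
x = Psi.T @ s

# Random demodulator: chip with a random +/-1 sequence, then
# integrate-and-dump (sum each block of n/m consecutive samples)
D = np.diag(rng.choice([-1.0, 1.0], size=n))
H = np.kron(np.eye(m), np.ones(n // m))
y = H @ D @ x                               # m low-rate samples

# Basis pursuit: min ||s||_1 s.t. A s = y, written as an LP in (u, v) >= 0
A = H @ D @ Psi.T
res = linprog(c=np.ones(2 * n),
              A_eq=np.hstack([A, -A]), b_eq=y,
              bounds=(0, None), method='highs')
s_hat = res.x[:n] - res.x[n:]
print('max coefficient error:', np.abs(s_hat - s).max())
```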

    Sparsity and Incoherence in Compressive Sampling

    We consider the problem of reconstructing a sparse signal x^0 in R^n from a limited number of linear measurements. Given m randomly selected samples of U x^0, where U is an orthonormal matrix, we show that ℓ1 minimization recovers x^0 exactly when the number of measurements exceeds m ≥ Const · μ²(U) · S · log n, where S is the number of nonzero components in x^0 and μ(U) is the largest entry in U, properly normalized: μ(U) = √n · max_{k,j} |U_{k,j}|. The smaller μ(U), the fewer samples needed. The result holds for "most" sparse signals x^0 supported on a fixed (but arbitrary) set T. Given T, if the sign of x^0 for each nonzero entry on T and the observed values of U x^0 are drawn at random, the signal is recovered with overwhelming probability. Moreover, there is a sense in which this is nearly optimal, since any method succeeding with the same probability would require just about this many samples.
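
    Because the sample bound scales with μ²(U), the coherence is the quantity to check in practice. A minimal sketch (with an illustrative n = 64): the unitary DFT achieves the best possible μ = 1, while the identity achieves the worst case μ = √n.

```python
import numpy as np

def coherence(U):
    """mu(U) = sqrt(n) * max_{k,j} |U_{k,j}| for an n x n orthonormal U."""
    return np.sqrt(U.shape[0]) * np.abs(U).max()

n = 64
F = np.fft.fft(np.eye(n)) / np.sqrt(n)   # unitary DFT matrix
print(coherence(F))                      # every entry has magnitude 1/sqrt(n): mu = 1
print(coherence(np.eye(n)))              # identity: mu = sqrt(n), the worst case
```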

    Practical recipes for the model order reduction, dynamical simulation, and compressive sampling of large-scale open quantum systems

    This article presents numerical recipes for simulating high-temperature and non-equilibrium quantum spin systems that are continuously measured and controlled. The notion of a spin system is broadly conceived, in order to encompass macroscopic test masses as the limiting case of large-j spins. The simulation technique has three stages: first the deliberate introduction of noise into the simulation, then the conversion of that noise into an equivalent continuous measurement and control process, and finally, projection of the trajectory onto a state-space manifold having reduced dimensionality and possessing a Kähler potential of multi-linear form. The resulting simulation formalism is used to construct a positive P-representation for the thermal density matrix. Single-spin detection by magnetic resonance force microscopy (MRFM) is simulated, and the data statistics are shown to be those of a random telegraph signal with additive white noise. Larger-scale spin-dust models are simulated, having no spatial symmetry and no spatial ordering; the high-fidelity projection of numerically computed quantum trajectories onto low-dimensional Kähler state-space manifolds is demonstrated. The reconstruction of quantum trajectories from sparse random projections is demonstrated, the onset of Donoho-Stodden breakdown at the Candes-Tao sparsity limit is observed, a deterministic construction for sampling matrices is given, and methods for quantum state optimization by Dantzig selection are described.
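
    The single-spin MRFM claim is easy to illustrate numerically. The following minimal sketch generates a random telegraph signal with additive white Gaussian noise; the switching rate and noise level are arbitrary illustrative values, not the calibrated MRFM parameters used in the article.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 10_000            # number of samples
flip_rate = 0.01      # probability of a spin flip per sample
sigma = 0.5           # standard deviation of the additive white noise

# Two-state Markov (telegraph) process: +1/-1 with random switching
state = np.empty(T)
s = 1.0
for t in range(T):
    if rng.random() < flip_rate:
        s = -s
    state[t] = s

signal = state + sigma * rng.standard_normal(T)  # telegraph + white noise
print('expected dwell time between flips:', 1.0 / flip_rate, 'samples')
```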

    The United States COVID-19 Forecast Hub dataset

    Academic researchers, government agencies, industry groups, and individuals have produced forecasts at an unprecedented scale during the COVID-19 pandemic. To leverage these forecasts, the United States Centers for Disease Control and Prevention (CDC) partnered with an academic research lab at the University of Massachusetts Amherst to create the US COVID-19 Forecast Hub. Launched in April 2020, the Forecast Hub is a dataset with point and probabilistic forecasts of incident cases, incident hospitalizations, incident deaths, and cumulative deaths due to COVID-19 at the county, state, and national levels in the United States. Included forecasts represent a variety of modeling approaches, data sources, and assumptions regarding the spread of COVID-19. The goal of this dataset is to establish a standardized and comparable set of short-term forecasts from modeling teams. These data can be used to develop ensemble models, communicate forecasts to the public, create visualizations, compare models, and inform policies regarding COVID-19 mitigation. These open-source data are available for download from GitHub, through an online API, and through R packages.
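
    For example, a submission file in the hub's quantile format can be filtered with a few lines of pandas. The sketch below assumes a locally downloaded file; the file name is a placeholder, and the column names and target strings follow the hub's documented format, which should be verified against the repository.

```python
import pandas as pd

# Hypothetical local copy of one team's submission file
df = pd.read_csv('some-team-model.csv')

# 1-week-ahead incident death forecasts at the national level
wk1 = df[(df['target'] == '1 wk ahead inc death') & (df['location'] == 'US')]

# Point forecast plus a 90% interval from the 0.05 and 0.95 quantiles
point = wk1[wk1['type'] == 'point']['value']
lo = wk1[(wk1['type'] == 'quantile') & (wk1['quantile'] == 0.05)]['value']
hi = wk1[(wk1['type'] == 'quantile') & (wk1['quantile'] == 0.95)]['value']
print(point.values, lo.values, hi.values)
```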

    Convex optimization for minimized sampling

    Issued as final report. Northrop Corporation.

    Multiscale geometric image processing

    Since their introduction a little more than 10 years ago, wavelets have revolutionized image processing. Wavelet-based algorithms define the state of the art for applications including image coding (JPEG-2000), restoration, and segmentation. Despite their success, wavelets have significant shortcomings in their treatment of edges. Wavelets do not parsimoniously capture even the simplest geometrical structure in images, and wavelet-based processing algorithms often produce images with ringing around the edges. As a first step towards accounting for this structure, we will show how to explicitly capture the geometric regularity of contours in cartoon images using the wedgelet representation and a multiscale geometry model. The wedgelet representation builds up an image out of simple piecewise constant functions with linear discontinuities. We will show how the geometry model, by placing a joint distribution on the orientations of the linear discontinuities, allows us to weigh several factors when choosing the wedgelet representation: the error between the representation and the original image, the parsimony of the representation, and whether the wedgelets in the representation form "natural" geometrical structures. We will analyze a simple wedgelet coder based on these principles, and show that it has optimal asymptotic performance for simple cartoon images. Next, we turn our attention to piecewise smooth images: images that are smooth away from a smooth contour. Using a representation composed of wavelets and wedgeprints (wedgelets projected into the wavelet domain), we develop a quadtree-based prototype coder whose rate-distortion performance is asymptotically near-optimal. We use these ideas to implement a full-scale image coder that outperforms JPEG-2000 both in peak signal-to-noise ratio (by 1-1.5 dB at low bitrates) and visually. Finally, we shift our focus to building a statistical image model directly in the wavelet domain. For applications other than compression, the approximate shift-invariance and directional selectivity of the slightly redundant complex wavelet transform make it particularly well suited for modeling singularity structure. Around edges in images, complex wavelet coefficients behave very predictably, exhibiting dependencies that we will exploit using a hidden Markov tree model. We demonstrate the effectiveness of the complex wavelet model with several applications: image denoising, multiscale segmentation, and feature extraction.
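
    The core fitting step behind the wedgelet representation is concrete enough to sketch: over a small dictionary of lines (angle and offset), choose the split of a block that minimizes squared error when each side is approximated by its mean. The Python below is an illustrative toy, not the thesis code; the block size and dictionary discretization are arbitrary choices.

```python
import numpy as np

def fit_wedgelet(block, n_angles=16, n_offsets=16):
    """Best piecewise-constant fit with one linear discontinuity."""
    h, w = block.shape
    yy, xx = np.mgrid[0:h, 0:w]
    best = (np.inf, None)
    for theta in np.linspace(0, np.pi, n_angles, endpoint=False):
        proj = np.cos(theta) * xx + np.sin(theta) * yy  # distance along normal
        for c in np.linspace(proj.min(), proj.max(), n_offsets):
            mask = proj <= c                            # one side of the line
            if mask.all() or (~mask).all():
                continue                                # degenerate split
            a, b = block[mask].mean(), block[~mask].mean()
            approx = np.where(mask, a, b)               # two-constant wedge
            err = ((block - approx) ** 2).sum()
            if err < best[0]:
                best = (err, approx)
    return best

# Toy cartoon block: an ideal step edge plus mild noise
rng = np.random.default_rng(0)
block = (np.add.outer(np.arange(16), np.arange(16)) > 18).astype(float)
err, approx = fit_wedgelet(block + 0.05 * rng.standard_normal((16, 16)))
print('squared error of best wedgelet:', err)
```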

    A universal hidden Markov tree image model

    Wavelet-domain hidden Markov models have proven to be useful tools for statistical signal and image processing. The hidden Markov tree (HMT) model captures the key features of the joint density of the wavelet coefficients of real-world data. One potential drawback to the HMT framework is the need for computationally expensive iterative training. We propose two reduced-parameter HMT models that capture the general structure of a broad class of real-world images. In the image HMT model, we use the fact that for real-world images the structure of the HMT is self-similar across scale, allowing us to reduce the complexity of the model to just nine parameters. In the universal HMT we fix these nine parameters, eliminating training while retaining nearly all of the key structure modeled by the full HMT. Finally, we propose a fast shift-invariant HMT estimation algorithm that outperforms all other wavelet-based estimators in the current literature.
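
    The parameter tying can be illustrated with a toy calculation. In a two-state (small/large coefficient) HMT, self-similarity across scale amounts to reusing the child-given-parent transition probabilities down the tree; propagating the root state distribution through them gives the marginal state probabilities at each scale. The numbers below are illustrative assumptions, not the paper's fitted nine parameters, and the strict sharing of one transition matrix is a simplification of the scale-tied model.

```python
import numpy as np

p_root = np.array([0.5, 0.5])   # P(small, large) at the coarsest scale
# Child-given-parent transitions, shared across scales (illustrative):
# 'large' states persist down the tree, capturing edge cascades
A = np.array([[0.9, 0.1],       # small parent -> (small, large) child
              [0.3, 0.7]])      # large parent -> (small, large) child

p = p_root
for scale in range(1, 6):
    p = p @ A                   # marginal state probabilities, next scale
    print(f'scale {scale}: P(small, large) = {p.round(3)}')
```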

    Superfast Tikhonov Regularization of Toeplitz Systems
